The asteroid main belt is crossed by a web of mean-motion and secular resonances, which occur when there is a commensurability between the fundamental frequencies of the asteroids and the planets. Traditionally, these objects have been identified by visual inspection of the time evolution of their resonant arguments, which are combinations of the orbital elements of the asteroid and the perturbing planet. Since, in some cases, the population of asteroids affected by these resonances is of the order of several thousand, this has become a taxing task for a human observer. Recent works have used Convolutional Neural Network (CNN) models to perform such tasks automatically. In this work, we compare the outcomes of such models with those of some of the most advanced and publicly available CNN architectures, such as VGG, Inception, and ResNet. The performance of these models is first tested and optimized using validation sets and a series of regularization techniques, such as data augmentation, dropout, and batch normalization. The three best-performing models are then used to predict the labels of a larger testing database containing thousands of images. The VGG models, with and without regularization, proved to be the most efficient at predicting the labels of large datasets. Since the Vera C. Rubin Observatory is likely to discover up to four million new asteroids in the next few years, the use of such models could become very valuable for identifying members of resonant minor-body populations.
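As a concrete illustration of the regularization techniques compared above, here is a minimal sketch of a small VGG-style binary classifier with data augmentation, dropout, and batch normalization (Keras; the input size, layer widths, and augmentation choices are assumptions for illustration, not the authors' exact configuration):

```python
# Minimal sketch of a VGG-style binary classifier using the regularization
# techniques mentioned above (data augmentation, dropout, batch normalization).
# Input shape and layer widths are illustrative assumptions.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_vgg_like(input_shape=(100, 100, 1)):
    augment = tf.keras.Sequential([
        layers.RandomFlip("horizontal"),
        layers.RandomTranslation(0.05, 0.05),
    ])
    model = models.Sequential([
        layers.Input(shape=input_shape),
        augment,                                   # data augmentation
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.Conv2D(32, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),               # batch normalization
        layers.MaxPooling2D(),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.Conv2D(64, 3, padding="same", activation="relu"),
        layers.BatchNormalization(),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(128, activation="relu"),
        layers.Dropout(0.5),                       # dropout
        layers.Dense(1, activation="sigmoid"),     # resonant / non-resonant label
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy",
                  metrics=["accuracy"])
    return model
```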
State-of-the-art brain tumor segmentation is based on deep learning models applied to multi-modal MRIs. Currently, these models are trained on images after a preprocessing stage that involves registration, interpolation, brain extraction (BE, also known as skull-stripping) and manual correction by an expert. However, for clinical practice, this last step is tedious and time-consuming and, therefore, not always feasible, resulting in skull-stripping faults that can negatively impact the tumor segmentation quality. Still, the extent of this impact has never been measured for any of the many different BE methods available. In this work, we propose an automatic brain tumor segmentation pipeline and evaluate its performance with multiple BE methods. Our experiments show that the choice of a BE method can compromise up to 15.7% of the tumor segmentation performance. Moreover, we propose training and testing tumor segmentation models on non-skull-stripped images, effectively discarding the BE step from the pipeline. Our results show that this approach leads to a competitive performance at a fraction of the time. We conclude that, in contrast to the current paradigm, training tumor segmentation models on non-skull-stripped images can be the best option when high performance in clinical practice is desired.
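To make the reported impact measurable in practice, here is a minimal sketch of how one might quantify the segmentation degradation attributable to a BE method, assuming binary tumor masks stored as NumPy arrays (file names are placeholders; this is a generic Dice comparison, not the authors' evaluation code):

```python
# Generic sketch: compare tumor segmentations produced by the same model after
# two different brain-extraction (BE) methods, using the Dice coefficient
# against a reference annotation.
import numpy as np

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between two binary masks."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    return 2.0 * np.logical_and(pred, ref).sum() / denom if denom else 1.0

# Hypothetical masks: reference annotation and predictions after two BE methods.
reference = np.load("tumor_reference.npy")          # placeholder paths
pred_be_a = np.load("tumor_pred_be_method_a.npy")
pred_be_b = np.load("tumor_pred_be_method_b.npy")

dice_a, dice_b = dice(pred_be_a, reference), dice(pred_be_b, reference)
print(f"Dice with BE method A: {dice_a:.3f}")
print(f"Dice with BE method B: {dice_b:.3f}")
print(f"Relative drop: {100 * (dice_a - dice_b) / dice_a:.1f}%")
```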
Understanding deep learning model behavior is critical to accepting machine learning-based decision support systems in the medical community. Previous research has shown that jointly using clinical notes with electronic health record (EHR) data improved predictive performance for patient monitoring in the intensive care unit (ICU). In this work, we explore the underlying reasons for these improvements. While relying on a basic attention-based model to allow for interpretability, we first confirm that performance significantly improves over state-of-the-art EHR data models when combining EHR data and clinical notes. We then provide an analysis showing that the improvements arise almost exclusively from a subset of notes containing broader context on the patient's state rather than from clinician notes. We believe these findings indicate that deep learning models for EHR data are limited more by partially descriptive data than by modeling choices, motivating a more data-centric approach in the field.
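As a rough illustration of the kind of interpretable fusion model discussed above, here is a minimal sketch (PyTorch; dimensions and architecture are assumptions, not the authors' exact model) of attention pooling over note embeddings combined with structured EHR features:

```python
# Illustrative sketch (not the authors' architecture): a basic attention layer
# pools a patient's note embeddings, and the result is concatenated with
# structured EHR features for prediction. The attention weights indicate which
# notes drive the prediction, which supports the kind of analysis above.
import torch
import torch.nn as nn

class NotesEHRModel(nn.Module):
    def __init__(self, note_dim=768, ehr_dim=64, hidden=128):
        super().__init__()
        self.att = nn.Linear(note_dim, 1)            # one attention score per note
        self.head = nn.Sequential(
            nn.Linear(note_dim + ehr_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, notes, ehr):
        # notes: (batch, n_notes, note_dim), ehr: (batch, ehr_dim)
        weights = torch.softmax(self.att(notes), dim=1)    # (batch, n_notes, 1)
        pooled = (weights * notes).sum(dim=1)              # attention-pooled notes
        logit = self.head(torch.cat([pooled, ehr], dim=-1))
        return logit.squeeze(-1), weights.squeeze(-1)      # prediction + note weights
```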
Evaluating new techniques on realistic datasets plays a crucial role in the development of ML research and its broader adoption by practitioners. In recent years, there has been a significant increase in publicly available unstructured data resources for computer vision and NLP tasks. However, tabular data -- which is prevalent in many high-stakes domains -- has been lagging behind. To bridge this gap, we present Bank Account Fraud (BAF), the first publicly available privacy-preserving, large-scale, realistic suite of tabular datasets. The suite was generated by applying state-of-the-art tabular data generation techniques on an anonymized, real-world bank account opening fraud detection dataset. This setting carries a set of challenges that are commonplace in real-world applications, including temporal dynamics and significant class imbalance. Additionally, to allow practitioners to stress test both the performance and fairness of ML methods, each dataset variant of BAF contains specific types of data bias. With this resource, we aim to provide the research community with a more realistic, complete, and robust test bed to evaluate novel and existing methods.
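As an example of the kind of joint performance/fairness stress test such a suite enables, here is a minimal sketch assuming a pandas DataFrame with a fraud_bool label and a sensitive-attribute column (the column names and the 5% FPR operating point are assumptions for illustration, not necessarily the official benchmark protocol):

```python
# Generic sketch of a joint performance/fairness evaluation: choose a decision
# threshold at a fixed false-positive rate, then compare group-wise FPRs across
# a sensitive attribute. Column names are assumptions; `scores` must be aligned
# with the rows of `df`.
import numpy as np
import pandas as pd
from sklearn.metrics import roc_curve

def evaluate(df: pd.DataFrame, scores: np.ndarray,
             label="fraud_bool", group="customer_age_group", target_fpr=0.05):
    fpr, tpr, thr = roc_curve(df[label], scores)
    threshold = thr[np.searchsorted(fpr, target_fpr)]      # threshold at ~5% FPR
    preds = scores >= threshold
    recall = preds[df[label] == 1].mean()                   # recall at fixed FPR
    group_fpr = {
        g: preds[(df[group] == g) & (df[label] == 0)].mean()
        for g in df[group].unique()
    }
    # Predictive-equality style ratio: smallest vs. largest group FPR.
    fairness = min(group_fpr.values()) / max(group_fpr.values())
    return recall, group_fpr, fairness
```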
Neural networks trained on large datasets by minimizing a loss have become the state-of-the-art approach for resolving data science problems, particularly in computer vision, image processing and natural language processing. In spite of their striking results, our theoretical understanding about how neural networks operate is limited. In particular, what are the interpolation capabilities of trained neural networks? In this paper we discuss a theorem of Domingos stating that "every machine learned by continuous gradient descent is approximately a kernel machine". According to Domingos, this fact leads to conclude that all machines trained on data are mere kernel machines. We first extend Domingos' result to the discrete case and to networks with vector-valued output. We then study its relevance and significance on simple examples. We find that in simple cases, the "neural tangent kernel" arising in Domingos' theorem does provide understanding of the networks' predictions. Furthermore, when the task given to the network grows in complexity, the interpolation capability of the network can be effectively explained by Domingos' theorem, and therefore is limited. We illustrate this fact on a classic perception theory problem: recovering a shape from its boundary.
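For readers who have not seen it, Domingos' statement can be summarized in roughly the following form (a simplified rendering, not a verbatim quote of the theorem):

```latex
f(x) \;\approx\; \sum_{i=1}^{n} a_i \, K^{\mathrm{path}}(x, x_i) \;+\; b,
\qquad
K^{\mathrm{path}}(x, x') \;=\; \int_{0}^{T} \nabla_w f_{w(t)}(x) \cdot \nabla_w f_{w(t)}(x') \,\mathrm{d}t
```

Here w(t) is the gradient-descent trajectory, the weights a_i are determined by the loss derivatives at the training points along that trajectory, b is the prediction of the initial model, and the integrand is the (neural) tangent kernel evaluated at w(t); the path kernel is its average over training.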
High-quality traffic flow generation is the core module in building simulators for autonomous driving. However, most available simulators are incapable of replicating traffic patterns that accurately reflect the various features of real-world data while also simulating human-like reactive responses to the tested autopilot driving strategies. Taking a step toward addressing this problem, we propose Realistic Interactive TrAffic flow (RITA) as an integrated component of existing driving simulators to provide high-quality traffic flow for the evaluation and optimization of the tested driving strategies. RITA is designed with fidelity, diversity, and controllability in mind, and consists of two core modules called RITABackend and RITAKit. RITABackend is built to support vehicle-wise control and to provide traffic generation models learned from real-world datasets, while RITAKit offers easy-to-use interfaces for controllable traffic generation via RITABackend. We demonstrate RITA's capacity to create diversified and high-fidelity traffic simulations in several highly interactive highway scenarios. The experimental findings show that the generated RITA traffic flows meet all three design goals, thereby enhancing the completeness of driving strategy evaluation. Moreover, we showcase the possibility of further improving baseline strategies through online fine-tuning with RITA traffic flows.
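To make the backend/kit split concrete, here is a purely hypothetical usage sketch; every name below is invented for illustration and does not reflect the actual RITA interfaces:

```python
# Purely hypothetical sketch of the backend/kit split described above; all
# class and method names are invented and are not the real RITA API.
from typing import Protocol, Dict

class TrafficBackend(Protocol):
    """Role of RITABackend: data-driven generation models with vehicle-wise control."""
    def sample_vehicle_action(self, vehicle_id: int, observation: Dict) -> Dict: ...

class TrafficKit:
    """Role of RITAKit: easy-to-use, controllable traffic generation on top of a backend."""
    def __init__(self, backend: TrafficBackend, density: float, style: str):
        # density/style stand in for the controllability knobs exposed to users.
        self.backend, self.density, self.style = backend, density, style

    def step(self, observations: Dict[int, Dict]) -> Dict[int, Dict]:
        # Delegate per-vehicle (reactive) decisions to the learned backend models.
        return {vid: self.backend.sample_vehicle_action(vid, obs)
                for vid, obs in observations.items()}
```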
When dealing with clinical text classification on small datasets, recent studies have confirmed that a well-tuned multilayer perceptron outperforms other generative classifiers, including deep learning ones. To increase the performance of such a neural network classifier, feature selection on the learned representations can be used effectively. However, most feature selection methods only estimate the degree of linear dependency between variables and select the best features based on univariate statistical tests. Moreover, the sparsity of the feature space involved in the learned representations is ignored. Goal: Our aim is therefore to investigate an alternative approach that tackles the sparsity by compressing the clinical representation feature space, in which limited French clinical notes can also be handled effectively. Methods: This study proposes an autoencoder learning algorithm to take advantage of the sparsity in the representation of clinical notes. The motivation is to determine how to compress sparse, high-dimensional data by reducing the dimensionality of the feature space in which the clinical notes are represented. The classification performance of the classifiers is then evaluated in the trained and compressed feature space. Results: The proposed approach yielded an overall performance gain of up to 3% for each evaluation. Finally, the classifier achieved 92% accuracy, 91% recall, 91% precision, and a 91% F1-score in detecting patient conditions. Furthermore, the compression mechanism and the autoencoder prediction process are demonstrated by applying the theoretical information bottleneck framework.
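A minimal sketch of the compression step described in the Methods (Keras; the vocabulary size, layer widths, and bottleneck dimension are illustrative assumptions, not the study's configuration):

```python
# Sketch: an autoencoder compresses sparse, high-dimensional clinical-note
# vectors into a low-dimensional code; a downstream classifier is then trained
# on the codes rather than on the raw representation.
from tensorflow.keras import layers, models

input_dim, code_dim = 10000, 128          # e.g. a sparse TF-IDF vocabulary size

inputs = layers.Input(shape=(input_dim,))
code = layers.Dense(512, activation="relu")(inputs)
code = layers.Dense(code_dim, activation="relu", name="bottleneck")(code)
decoded = layers.Dense(512, activation="relu")(code)
decoded = layers.Dense(input_dim, activation="sigmoid")(decoded)

autoencoder = models.Model(inputs, decoded)
encoder = models.Model(inputs, code)
autoencoder.compile(optimizer="adam", loss="binary_crossentropy")

# autoencoder.fit(X_notes, X_notes, epochs=20, batch_size=64)
# X_compressed = encoder.predict(X_notes)   # compressed feature space
# ...then train any classifier (e.g. an MLP) on X_compressed instead of X_notes.
```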
Automatically predicting the outcome of subjective listening tests is a challenging task. Ratings may vary from person to person even if the preferences among listeners are consistent. While previous work has focused on predicting listeners' ratings (mean opinion scores) of individual stimuli, we focus on the simpler task of predicting subjective preference given two speech stimuli for the same text. We propose a model based on anti-symmetric twin neural networks, trained on pairs of waveforms and their corresponding preference scores. We explore attention and recurrent neural nets to account for the fact that the stimuli in a pair are not time-aligned. To obtain a large training set, we convert listeners' ratings from MUSHRA tests into values that reflect how often one stimulus in the pair was rated higher than the other. Specifically, we evaluate data obtained from twelve MUSHRA evaluations conducted over five years, containing different TTS systems built on data from different speakers. Our results compare favourably to a state-of-the-art model trained to predict MOS scores.
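One simple way to obtain the anti-symmetry property is to share an encoder between the two stimuli and score the difference of their embeddings, so that swapping the pair flips the sign of the predicted preference. A minimal sketch (PyTorch; the toy convolutional encoder is a stand-in for the attention/recurrent encoders mentioned above, and all dimensions are assumptions):

```python
# Sketch of an anti-symmetric twin (Siamese) preference model: both waveforms
# pass through the same encoder, and the preference score is a bias-free linear
# function of the embedding difference, so score(a, b) == -score(b, a) by
# construction (tanh is odd).
import torch
import torch.nn as nn

class AntisymmetricTwin(nn.Module):
    def __init__(self, emb_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(          # shared weights for both stimuli
            nn.Conv1d(1, 32, kernel_size=9, stride=4), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=9, stride=4), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, emb_dim),
        )
        self.score = nn.Linear(emb_dim, 1, bias=False)   # no bias keeps anti-symmetry

    def forward(self, wav_a, wav_b):
        # wav_a, wav_b: (batch, 1, n_samples); output in (-1, 1), sign = preference
        diff = self.encoder(wav_a) - self.encoder(wav_b)
        return torch.tanh(self.score(diff)).squeeze(-1)
```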
Following the success of contextualized language models, much research has explored what these models really learn and in which cases they still fail. Most of this work has focused on specific NLP tasks and on the learning outcome. Few studies have attempted to relate the models' weaknesses to those of a specific task while focusing on the embeddings themselves and the way they are learned. In this paper, we take up this research opportunity: based on theoretical linguistic insights, we explore whether the semantic constraints of function words are learned and how the surrounding context impacts their embeddings. We create suitable datasets, provide new insights into the inner workings of LMs vis-à-vis function words, and implement an assisting visual web interface for qualitative analysis.
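A generic probing sketch of the kind of question asked above, using Hugging Face transformers to compare the contextual embedding of a function word across contexts (the model choice, example sentences, and the single-wordpiece assumption are illustrative; this is not the authors' dataset or interface):

```python
# Generic probing sketch: extract the contextual embedding of a function word
# in different sentences and compare them, to see how strongly the surrounding
# context moves the embedding. Assumes the word is a single word-piece in the
# tokenizer's vocabulary.
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

def embed_word(sentence: str, word: str) -> torch.Tensor:
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]          # (seq_len, 768)
    word_id = tokenizer.convert_tokens_to_ids(word)
    position = (enc["input_ids"][0] == word_id).nonzero()[0, 0]
    return hidden[position]

# The function word "unless" in two different contexts:
e1 = embed_word("I will go unless it rains.", "unless")
e2 = embed_word("Unless stated otherwise, all results are averaged.", "unless")
print(torch.cosine_similarity(e1, e2, dim=0).item())
```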
Audio-based pornography detection can enable effective adult-content filtering by exploiting different spectral features. To improve it, we explore pornographic sound modelling based on different neural architectures and acoustic features. We find that a CNN trained on log-mel spectrograms can achieve the best performance on the Pornography-800 dataset. Our experimental results also show that log-mel spectrograms provide a better representation for the models to identify pornographic sounds. Finally, to classify entire audio waveforms rather than individual segments, we adopt a segment-to-audio voting technique, which yields the best audio-level detection results.
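A minimal sketch of the feature-extraction and segment-to-audio voting steps (librosa; the segment length, sampling rate, and mel-band count are illustrative assumptions, and the CNN is treated as a black-box segment_classifier):

```python
# Sketch: split an audio file into fixed-length segments, compute a log-mel
# spectrogram per segment, classify each segment, then vote to obtain the
# audio-level label (segment-to-audio voting).
import numpy as np
import librosa

def logmel_segments(path, seg_seconds=2.0, sr=16000, n_mels=64):
    y, sr = librosa.load(path, sr=sr)
    seg_len = int(seg_seconds * sr)
    segments = [y[i:i + seg_len] for i in range(0, len(y) - seg_len + 1, seg_len)]
    feats = []
    for seg in segments:
        mel = librosa.feature.melspectrogram(y=seg, sr=sr, n_mels=n_mels)
        feats.append(librosa.power_to_db(mel))           # log-mel spectrogram
    return np.stack(feats)                               # (n_segments, n_mels, frames)

def audio_level_label(path, segment_classifier):
    """Majority vote over per-segment predictions (segment-to-audio voting)."""
    feats = logmel_segments(path)
    seg_preds = segment_classifier(feats)                # array of 0/1 per segment
    return int(np.round(seg_preds.mean()))               # 1 = pornographic content
```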